Search for: All records

Creators/Authors contains: "Ortega, Francisco R."

  1. Annotation in 3D user interfaces such as Augmented Reality (AR) and Virtual Reality (VR) is a challenging and promising area; however, no surveys currently review these contributions. To provide a survey of annotation for Extended Reality (XR) environments, we conducted a structured literature review of papers that used annotation in their AR/VR systems between 2001 and 2021. Our literature review process consisted of several filtering steps that yielded 103 XR publications with a focus on annotation. We classified these papers based on display technology, input device, annotation type, target object under annotation, collaboration type, modality, and collaborative technology. A survey of annotation in XR is an invaluable resource for researchers and newcomers. Finally, we provide a database of the collected information for each reviewed paper, including its applications, display technologies and their annotators, input devices, modalities, annotation types, interaction techniques, collaboration types, and tasks. This database provides rapid access to the collected data and gives users the ability to search or filter for the information they need. This survey provides a starting point for anyone interested in researching annotation in XR environments.
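
    As a rough illustration of the search-and-filter access such a database could offer, the Python sketch below filters a list of paper records by their classification fields. The field names and sample records are hypothetical assumptions, not the survey's actual schema.

        # Minimal sketch: filtering a hypothetical XR-annotation paper database.
        # Field names and sample records are illustrative assumptions, not the
        # survey's actual schema.
        papers = [
            {"title": "Paper A", "display": "AR-HMD", "input": "hand gesture",
             "annotation_type": "text", "collaboration": "co-located"},
            {"title": "Paper B", "display": "VR-HMD", "input": "controller",
             "annotation_type": "drawing", "collaboration": "remote"},
        ]

        def filter_papers(records, **criteria):
            """Return records whose fields match every given criterion."""
            return [r for r in records
                    if all(r.get(field) == value
                           for field, value in criteria.items())]

        # Usage: find all AR head-mounted-display papers with text annotation.
        for paper in filter_papers(papers, display="AR-HMD", annotation_type="text"):
            print(paper["title"])
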
  2. Elicitation studies have become a popular method of participatory design. While traditionally used to examine unimodal gesture interactions, elicitation has begun to be used with other novel interaction modalities. Unfortunately, no prior work examines the impact of referent display on elicited interaction proposals. To address that gap, this work provides a detailed comparison between two elicitation studies that were similar in design apart from the way participants were prompted for interaction proposals (i.e., the referents). Based on this comparison, the impact of referent display on speech and on gesture interaction proposals is discussed. The interaction proposals from the two elicitation studies were not identical. Gesture proposals were the least affected by referent display, showing high proposal similarity between the two works. Speech proposals, by contrast, were highly biased by text referents, directly mirroring a text-based referent an average of 69.36% of the time. In short, the way referents are presented during an elicitation study can affect the resulting interaction proposals; however, the level of impact depends on the modality of input being elicited.
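
    To make the 69.36% figure concrete, here is a minimal Python sketch of one way to estimate how often speech proposals mirror a text referent. The case-insensitive exact-match rule and the toy data are assumptions for illustration, not the papers' actual coding method.

        # Minimal sketch: estimating how often speech proposals mirror a text
        # referent. The matching rule (case-insensitive exact match) is an
        # assumption, not the studies' actual analysis.
        def mirror_rate(referents_to_proposals):
            """referents_to_proposals: dict mapping referent text to a list of
            elicited speech proposals. Returns the fraction of proposals that
            repeat their referent verbatim."""
            total = mirrored = 0
            for referent, proposals in referents_to_proposals.items():
                for proposal in proposals:
                    total += 1
                    if proposal.strip().lower() == referent.strip().lower():
                        mirrored += 1
            return mirrored / total if total else 0.0

        # Usage with made-up data:
        data = {"move left": ["move left", "slide it left"],
                "rotate": ["rotate", "turn it", "rotate"]}
        print(f"{mirror_rate(data):.2%}")  # 60.00% for this toy example
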
  3. In today's era of the Internet of Things (IoT), where massive amounts of data are produced by IoT and other devices, edge computing has emerged as a prominent paradigm for low-latency data processing. However, applications may have diverse latency requirements: certain latency-sensitive processing operations may need to be performed at the edge, while delay-tolerant operations can be performed in the cloud without occupying the potentially limited edge computing resources. To achieve that, we envision an environment where computing resources are distributed across edge and cloud offerings. In this paper, we present the design of CLEDGE (CLoud + EDGE), an information-centric hybrid cloud-edge framework that aims to maximize the on-time completion of computational tasks offloaded by applications with diverse latency requirements. The design of CLEDGE is motivated by the networking challenges that mixed reality researchers face. Our evaluation demonstrates that CLEDGE can complete more than 90% of offloaded tasks on time, with modest overhead.
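
    As an illustration of the placement decision such a hybrid framework must make, the sketch below routes each task to the cloud when its deadline tolerates the longer round trip and reserves the edge for latency-sensitive work. The latency figures and the decision rule are assumptions for illustration, not CLEDGE's actual scheduling algorithm.

        # Minimal sketch of a latency-aware offloading decision: send tasks to
        # the cloud when the deadline allows it, reserving scarce edge
        # resources for tasks that would otherwise miss their deadline.
        # Latency figures and the rule itself are illustrative assumptions.
        from dataclasses import dataclass

        EDGE_RTT_MS = 10    # assumed edge round-trip latency
        CLOUD_RTT_MS = 80   # assumed cloud round-trip latency

        @dataclass
        class Task:
            name: str
            compute_ms: float   # estimated processing time
            deadline_ms: float  # completion deadline from submission

        def place(task: Task) -> str:
            if task.compute_ms + CLOUD_RTT_MS <= task.deadline_ms:
                return "cloud"
            if task.compute_ms + EDGE_RTT_MS <= task.deadline_ms:
                return "edge"
            return "reject"  # cannot meet the deadline anywhere

        for t in [Task("render-frame", 5, 30), Task("log-analytics", 50, 500)]:
            print(t.name, "->", place(t))  # render-frame -> edge, log-analytics -> cloud
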
  4. Objective: Evaluate and model the advantage of situation awareness (SA) supported by an augmented reality (AR) display for the ground-based joint terminal attack controller (JTAC) in judging and describing the spatial relations between objects in a hostile zone. Background: The accurate world-referenced description of the relative locations of surface objects, when viewed from an oblique slant angle (aircraft, observation post), is hindered by (1) the compression of the visual scene, amplified at a lower slant angle, and (2) the need for mental rotation when the scene is viewed from a non-northerly orientation. Approach: Participants viewed a virtual reality (VR)-simulated four-object scene from either of two slant angles, at each of four compass orientations, either unaided or aided by an AR head-mounted display (AR-HMD) depicting the scene from a top-down (avoiding compression) and north-up (avoiding mental rotation) perspective. They described the geographical layout of the four objects within the display. Results: Compared with the control condition, the condition supported by the north-up SA display shortened description time, particularly at non-northerly orientations (9 s, a 30% benefit), and improved description accuracy, particularly for the more compressed scene (lower slant angle), as fit by a simple computational model. Conclusion: The SA display provides large, significant benefits to this critical phase of ground-air communications in managing an attack, as predicted by the task analysis of the JTAC. Application: The results inform the design of the AR-HMD to support combat ground-air communications and illustrate the magnitude by which basic cognitive principles "scale up" to realistically simulated real-world tasks such as search and rescue.
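
    The two costs named in the Background follow from simple geometry and a classic cognitive result, sketched below in Python. The sin(slant) foreshortening factor is standard projection geometry; the linear mental-rotation cost, with a hypothetical rate parameter, is a stand-in inspired by classic mental-rotation findings, not the paper's fitted model.

        # Minimal sketch of the two costs named above. The foreshortening
        # factor follows from projection geometry; the linear rotation cost is
        # an assumed stand-in for the paper's actual computational model.
        import math

        def compression_factor(slant_deg: float) -> float:
            """Fraction of depth separation preserved on screen when a ground
            scene is viewed at the given slant angle (90 deg = top-down)."""
            return math.sin(math.radians(slant_deg))

        def rotation_cost_s(heading_deg: float,
                            rate_s_per_deg: float = 0.02) -> float:
            """Assumed extra description time for mentally rotating a view at
            heading_deg back to north-up; the rate is hypothetical."""
            offset = min(heading_deg % 360, 360 - heading_deg % 360)
            return rate_s_per_deg * offset

        print(f"compression at 30 deg slant: {compression_factor(30):.2f}")  # 0.50
        print(f"rotation cost facing south:  {rotation_cost_s(180):.1f} s")   # 3.6
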
  5. The role of affect has long been studied in human–computer interaction. Unlike previous studies that focused on seven basic emotions, this work introduced an avatar named Diana who expresses a higher level of emotional intelligence. To adapt to a user's varying affect during interaction, Diana simulates emotions with dynamic facial expressions. When two people collaborated to build blocks, their affects were recognized and labeled using the Affdex SDK, and a descriptive analysis was provided. When participants turned to collaborate with Diana, their subjective responses were collected and their time to completion was recorded. Three modes of Diana were involved: a flat-faced Diana, a Diana that used mimicry facial expressions, and a Diana that used emotionally responsive facial expressions. Twenty-one responses were collected through a five-point Likert-scale questionnaire and the NASA TLX. Results from the questionnaires were not statistically different. However, the emotionally responsive Diana received more positive responses, and people spent the longest time with the mimicry Diana. In post-study comments, most participants perceived the facial expressions on Diana's face as natural, while four mentioned discomfort caused by the uncanny valley effect.
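
    A minimal sketch of the three avatar modes described above, driving the avatar's expression from a recognized user emotion. The emotion labels, the response mapping, and the control flow are illustrative assumptions, not Diana's actual implementation or the Affdex SDK's interface.

        # Minimal sketch of the three avatar modes: flat (no feedback),
        # mimicry (mirror the user), and emotionally responsive (react to the
        # user). Labels and mapping are assumptions for illustration.
        RESPONSIVE_MAP = {
            "joy": "smile",        # share the user's positive affect
            "anger": "concern",    # de-escalate rather than mirror
            "sadness": "sympathy",
            "surprise": "curiosity",
        }

        def choose_expression(mode: str, user_emotion: str) -> str:
            if mode == "flat":
                return "neutral"                      # no affective feedback
            if mode == "mimicry":
                return user_emotion                   # mirror the user directly
            if mode == "responsive":
                return RESPONSIVE_MAP.get(user_emotion, "neutral")
            raise ValueError(f"unknown mode: {mode}")

        for mode in ("flat", "mimicry", "responsive"):
            print(mode, "->", choose_expression(mode, "anger"))
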
  6. This research establishes a better understanding of the syntax choices in speech interactions and of how speech, gesture, and multimodal gesture-and-speech interactions are produced by users in unconstrained object-manipulation environments using augmented reality. The work presents a multimodal elicitation study conducted with 24 participants. The canonical referents for translation, rotation, and scale were used, along with some abstract referents (create, destroy, and select). In this study, time windows for gesture-and-speech multimodal interactions are developed using the start and stop times of gestures and speech, as well as the stroke times of gestures. While gestures commonly precede speech by 81 ms, we find that the stroke of the gesture is commonly within 10 ms of the start of speech, indicating that the information content of a gesture and its co-occurring speech are well aligned with each other. Lastly, the trends across the most common proposals for each modality are examined, showing that disagreement between proposals is often caused by variation in hand posture or syntax. This allows us to present aliasing recommendations that increase the percentage of users' natural interactions captured by future multimodal interactive systems.
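
    The timing relations above suggest a simple fusion rule: pair a gesture with speech when the two overlap in time and the gesture stroke lands near speech onset. The sketch below implements that rule; the 50 ms window and the event format are assumptions for illustration, not the paper's segmentation method.

        # Minimal sketch of pairing a gesture with co-occurring speech using
        # the timing relations reported above (gesture onset ~81 ms before
        # speech, stroke within ~10 ms of speech onset). Window size and event
        # format are assumptions.
        from dataclasses import dataclass

        @dataclass
        class Gesture:
            start_ms: float
            stroke_ms: float
            stop_ms: float

        @dataclass
        class Speech:
            start_ms: float
            stop_ms: float

        def is_multimodal_pair(g: Gesture, s: Speech,
                               stroke_window_ms: float = 50.0) -> bool:
            """Treat gesture and speech as one multimodal command when they
            overlap and the gesture stroke falls close to speech onset."""
            overlaps = g.start_ms < s.stop_ms and s.start_ms < g.stop_ms
            stroke_aligned = abs(g.stroke_ms - s.start_ms) <= stroke_window_ms
            return overlaps and stroke_aligned

        g = Gesture(start_ms=919, stroke_ms=1005, stop_ms=1600)
        s = Speech(start_ms=1000, stop_ms=1500)
        print(is_multimodal_pair(g, s))  # True: stroke 5 ms after speech onset
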